ChatGPT's AI powers make better writers, MIT study finds
ChatGPT and its AI powers could help writers and office workers improve their writing quality and cut the time they spend on tasks, trading busy work for better, more productive work. However, the MIT study that suggested these conclusions also warned that employers could use AI to increase layoffs.

The paper, "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence" by Shakked Noy and Whitney Zhang of MIT's economics department, is a working paper and has not been peer-reviewed. Still, its conclusions about ChatGPT's chatbot technology are both fascinating and troubling, especially where the study weighed the effects on workers.

The two doctoral students split 444 college-educated professionals into two groups and assigned them to write press releases, emails, short reports, and analysis plans: a normal workday for many people.
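As a worked illustration of that two-group design, the productivity comparison reduces to a difference in group means for time spent (and, analogously, for graded quality). The sketch below is purely illustrative; every number in it is invented, not taken from the paper:

```python
import random
import statistics

random.seed(1)

# Hypothetical outcomes for two randomized groups of 222 professionals each
# (all values invented for illustration; the paper reports its own data).
control_minutes = [random.gauss(27, 6) for _ in range(222)]  # wrote unaided
chatgpt_minutes = [random.gauss(17, 5) for _ in range(222)]  # allowed ChatGPT

saved = statistics.mean(control_minutes) - statistics.mean(chatgpt_minutes)
print(f"average time saved per task with ChatGPT: {saved:.1f} minutes")
```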
Reinforcement learning frustrates humans in team play, MIT study finds
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Artificial intelligence has proven that complicated board and video games are no longer the exclusive domain of the human mind. From chess to Go to StarCraft, AI systems that use reinforcement learning algorithms have outperformed human world champions in recent years. But despite their high individual performance, RL agents can become frustrating teammates when paired with human players, according to a study by AI researchers at MIT Lincoln Laboratory. The study, which involved cooperation between humans and AI agents in the card game Hanabi, shows that players prefer classic, predictable rule-based AI systems over complex RL systems.
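The gap the study highlights is easy to see in code. Below is a minimal, illustrative sketch, not the study's actual agents: the Hanabi rules are drastically simplified, and the state fields and Q-values are invented. It contrasts a rule-based policy, whose moves a human teammate can predict from the visible rules, with an RL policy that simply takes whatever action its learned value estimates favor:

```python
ACTIONS = ["play_card", "discard_card", "give_hint"]

def rule_based_policy(state):
    # Hand-written heuristic with a fixed, human-readable priority order.
    # A teammate who knows these rules can anticipate every move.
    if state["known_playable_card"]:
        return "play_card"
    if state["hint_tokens"] > 0 and state["teammate_has_playable"]:
        return "give_hint"
    return "discard_card"

def rl_policy(state, q_values):
    # Learned policy: picks the action with the highest trained value.
    # The preferences come from training, not legible rules, so even
    # strong moves can look arbitrary or erratic to a human partner.
    return max(ACTIONS, key=lambda a: q_values[a])

state = {"known_playable_card": False,
         "hint_tokens": 2,
         "teammate_has_playable": True}

# Stand-in for a trained network's output on this state (numbers invented).
q_values = {"play_card": 0.71, "discard_card": 0.65, "give_hint": 0.42}

print(rule_based_policy(state))    # give_hint: follows the visible rule
print(rl_policy(state, q_values))  # play_card: favored by training, opaque to the human
```

Both policies can be strong players; the difference the study's participants reacted to is that the first agent's behavior is explainable from its rules, while the second's emerges from training.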
MIT study finds 'systematic' labeling errors in popular AI benchmark datasets
The field of AI and machine learning is arguably built on the shoulders of a few hundred papers, many of which draw conclusions using data from a small subset of public datasets. Large, labeled corpora have been critical to the success of AI in domains ranging from image classification to audio classification. That's because their annotations expose comprehensible patterns to machine learning algorithms, in effect telling machines what to look for in future data so they can make predictions. But while labeled data is usually equated with ground truth, datasets can, and do, contain errors. The processes used to construct corpora often involve some degree of automatic annotation or crowdsourcing, techniques that are inherently error-prone.
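To make that last point concrete, here is a toy simulation of crowdsourced labeling with majority voting, with every parameter invented for illustration: even when each annotator is right 80 percent of the time and three annotators vote on each item, roughly one label in ten still comes out wrong:

```python
import random

random.seed(0)

TRUE_LABEL = 1          # ground truth for every item in this toy setup
ANNOTATOR_ERROR = 0.2   # assumed per-annotator mistake rate
N_ANNOTATORS = 3
N_ITEMS = 10_000

def annotate(true_label):
    # One crowdworker's vote: correct most of the time, flipped otherwise.
    return true_label if random.random() > ANNOTATOR_ERROR else 1 - true_label

wrong = 0
for _ in range(N_ITEMS):
    votes = [annotate(TRUE_LABEL) for _ in range(N_ANNOTATORS)]
    majority = 1 if sum(votes) >= 2 else 0
    wrong += majority != TRUE_LABEL

# Analytically: P(majority wrong) = 3 * 0.2**2 * 0.8 + 0.2**3 = 0.104
print(f"mislabeled after majority vote: {wrong / N_ITEMS:.1%}")
```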
MIT study finds labelling errors in datasets used to test AI
A team led by computer scientists from MIT examined ten of the most-cited datasets used to test machine learning systems. They found that around 3.4 percent of the data was inaccurate or mislabeled, which could cause problems for AI systems evaluated against these benchmarks. The datasets, which have each been cited more than 100,000 times, include text-based ones drawn from newsgroups, Amazon and IMDb. Errors emerged from issues like Amazon product reviews being mislabeled as positive when they were actually negative, and vice versa. Some of the image-based errors resulted from mixing up animal species.
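A common way to surface such errors, sketched below in simplified form (this is in the spirit of confidence-based label cleaning, not necessarily the exact procedure the MIT team used, and the toy data and thresholds are assumptions), is to train a model with cross-validation and flag examples where its out-of-fold predictions confidently disagree with the recorded label:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy corpus standing in for a benchmark dataset; flip 5% of labels as noise.
X, y_true = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
noisy = rng.random(len(y_true)) < 0.05
y_given = np.where(noisy, 1 - y_true, y_true)

# Out-of-fold predicted probabilities, so each example is scored by a model
# that never saw it during training.
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y_given,
                          cv=5, method="predict_proba")

# Flag examples where the model puts little probability on the given label.
conf_in_given = probs[np.arange(len(y_given)), y_given]
suspects = np.where(conf_in_given < 0.1)[0]

print(f"flagged {len(suspects)} suspects; "
      f"{noisy[suspects].mean():.0%} of them are truly mislabeled")
```

Real-world pipelines in this family add per-class confidence thresholds and human review of the flagged candidates before declaring a label an error.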